Subassembly to Full Assembly: Effective Assembly Sequence Planning through Graph-based Reinforcement Learning

Shu, Chang, Kim, Anton, Park, Shinkyu

arXiv.org Artificial Intelligence

This paper proposes an assembly sequence planning framework, named Subassembly to Assembly (S2A). The framework is designed to enable a robotic manipulator to assemble multiple parts in a prespecified structure by leveraging object manipulation actions. The primary technical challenge lies in the exponentially increasing complexity of identifying a feasible assembly sequence as the number of parts grows. To address this, we introduce a graph-based reinforcement learning approach, where a graph attention network is trained using a delayed reward assignment strategy. In this strategy, rewards are assigned only when an assembly action contributes to the successful completion of the assembly task. We validate the framework's performance through physics-based simulations, comparing it against various baselines to emphasize the significance of the proposed reward assignment approach. Additionally, we demonstrate the feasibility of deploying our framework in a real-world robotic assembly scenario.
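The delayed reward assignment described in the abstract can be illustrated with a minimal sketch: intermediate assembly actions receive zero immediate reward, and a single reward is assigned only when the full assembly succeeds, then discounted back through the trajectory. The function name and signature below are hypothetical, not from the paper, and the actual S2A framework applies this signal to train a graph attention network.

```python
def delayed_returns(num_steps, assembly_succeeded, gamma=0.99, final_reward=1.0):
    """Assign reward only at the terminal step, then discount it back.

    Under the delayed-reward strategy, every intermediate assembly action
    gets zero immediate reward; a single positive reward arrives only if
    the final structure is completed successfully.
    """
    rewards = [0.0] * num_steps
    if assembly_succeeded:
        rewards[-1] = final_reward
    # Standard discounted return: G_t = sum_k gamma^k * r_{t+k}
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))
```

A failed episode yields all-zero returns, so only action sequences that complete the assembly are reinforced.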


Deep Reinforcement Learning for Modelling Protein Complexes

Gao, Ziqi, Feng, Tao, You, Jiaxuan, Zi, Chenyi, Zhou, Yan, Zhang, Chen, Li, Jia

arXiv.org Artificial Intelligence

AlphaFold can be used for both single-chain and multi-chain protein structure prediction, but the latter becomes extremely challenging as the number of chains increases. In this work, by taking each chain as a node and assembly actions as edges, we show that an acyclic undirected connected graph can be used to predict the structure of multi-chain protein complexes (a.k.a. protein complex modelling, PCM). To address this challenge, we propose GAPN, a Generative Adversarial Policy Network powered by domain-specific rewards and an adversarial loss, trained through policy gradient for automatic PCM prediction. Specifically, GAPN learns to efficiently search the immense assembly space and to optimize the direct docking reward through policy gradient. Importantly, we design an adversarial reward function to enhance the receptive field of our model; in this way, GAPN simultaneously focuses on a specific batch of complexes and on the global assembly rules learned from complexes with varied chain numbers. Empirically, we achieve significant improvements in both accuracy (measured by RMSD and TM-Score) and efficiency compared to leading PCM software. AlphaFold-Multimer (Evans et al., 2021), however, faces difficulties in maintaining high accuracy when dealing with complexes with a larger number (> 9) of chains (Bryant et al., 2022a; Burke et al., 2023; Bryant et al., 2022b).
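Two pieces of the GAPN idea can be sketched concretely: the "acyclic undirected connected graph" constraint means each assembly trajectory is a spanning tree over the chains (every new chain docks onto one already-assembled chain), and the training signal combines the direct docking reward with the adversarial reward. The names, the uniform-random policy, and the weighting scheme below are illustrative assumptions, not the paper's learned policy network.

```python
import random

def sample_assembly_tree(num_chains, rng=None):
    """Sample an acyclic, undirected, connected assembly graph (a spanning
    tree): each new chain docks onto one already-assembled chain, giving a
    connected complex after exactly num_chains - 1 assembly actions."""
    rng = rng or random.Random(0)
    assembled = [0]            # start from an arbitrary first chain
    edges = []
    for chain in range(1, num_chains):
        partner = rng.choice(assembled)   # a learned policy would score partners
        edges.append((partner, chain))
        assembled.append(chain)
    return edges

def total_reward(docking_reward, adversarial_reward, weight=0.5):
    """Combine the direct docking-quality reward with the adversarial
    reward that encodes global assembly rules learned across complexes."""
    return docking_reward + weight * adversarial_reward
```

In GAPN the partner choice is made by the policy network rather than uniformly at random, and the adversarial term is produced by a discriminator; the weighted sum here only shows how the two signals would be blended into one policy-gradient objective.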


Fusing Hand and Body Skeletons for Human Action Recognition in Assembly

Aganian, Dustin, Köhler, Mona, Stephan, Benedict, Eisenbach, Markus, Gross, Horst-Michael

arXiv.org Artificial Intelligence

As collaborative robots (cobots) continue to gain popularity in industrial manufacturing, effective human-robot collaboration becomes crucial. Cobots should be able to recognize human actions to assist with assembly tasks and act autonomously. To achieve this, skeleton-based approaches are often used due to their ability to generalize across various people and environments. Although body skeleton approaches are widely used for action recognition, they may not be accurate enough for assembly actions where the worker's fingers and hands play a significant role. To address this limitation, we propose a method in which less detailed body skeletons are combined with highly detailed hand skeletons. We investigate CNNs and transformers, the latter of which are particularly adept at extracting and combining important information from both skeleton types using attention. This paper demonstrates the effectiveness of our proposed approach in enhancing action recognition in assembly scenarios.
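The transformer-based fusion the abstract mentions treats every joint, from both the coarse body skeleton and the detailed hand skeletons, as a token and lets self-attention weigh information across the two. The sketch below uses identity query/key/value maps for brevity; a real model would apply learned projections and multiple heads, so this is a minimal illustration of the mechanism, not the paper's architecture.

```python
import numpy as np

def fuse_skeletons(body_joints, hand_joints):
    """Self-attention over the union of body and hand joint tokens.

    body_joints: (B, d) array of coarse body-joint features.
    hand_joints: (H, d) array of fine-grained hand-joint features.
    Returns (B + H, d) attended features mixing both skeleton types.
    """
    tokens = np.concatenate([body_joints, hand_joints], axis=0)  # (N, d)
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)        # pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ tokens                        # attention-weighted mix
```

Because attention is computed over all joints jointly, a body token can attend to finger tokens when the action is hand-dominated, which is the property the paper exploits for assembly actions.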


AssembleRL: Learning to Assemble Furniture from Their Point Clouds

Aslan, Özgür, Bolat, Burak, Bal, Batuhan, Tümer, Tuğba, Şahin, Erol, Kalkan, Sinan

arXiv.org Artificial Intelligence

The rise of simulation environments has enabled learning-based approaches to assembly planning, which is otherwise a labor-intensive and daunting task. Assembling furniture is especially interesting since furniture pieces are intricate and pose challenges for learning-based approaches. Surprisingly, humans can solve furniture assembly mostly from a 2D snapshot of the assembled product. Although recent years have witnessed promising learning-based approaches for furniture assembly, they assume the availability of correct connection labels for each assembly step, which are expensive to obtain in practice. In this paper, we relax this assumption and aim to solve furniture assembly with as little human expertise and supervision as possible. Specifically, we assume only the availability of the assembled product's point cloud and, by comparing the point cloud of the current assembly against that of the target product, obtain a novel reward signal based on two measures: incorrectness and incompleteness. We show that this reward signal can train a deep network to successfully assemble different types of furniture. Code and networks are available at: https://github.com/METU-KALFA/AssembleRL
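The two measures behind the reward signal can be sketched as complementary point-cloud comparisons: incompleteness counts target points with no nearby current point (parts still missing), while incorrectness counts current points with no nearby target point (parts attached in the wrong place). The function names, the fixed distance tolerance, and the brute-force nearest-point check are simplifying assumptions for illustration, not the paper's implementation.

```python
def _covered(p, pts, tol):
    """True if some point in pts lies within tol of p (squared distance)."""
    return any(sum((a - b) ** 2 for a, b in zip(p, q)) <= tol ** 2 for q in pts)

def incompleteness(current, target, tol=0.1):
    """Fraction of target points with no nearby current point:
    how much of the product is still missing."""
    return sum(1 for p in target if not _covered(p, current, tol)) / len(target)

def incorrectness(current, target, tol=0.1):
    """Fraction of current points with no nearby target point:
    how much of the current assembly sits in the wrong place."""
    return sum(1 for p in current if not _covered(p, target, tol)) / len(current)

def assembly_reward(current, target, tol=0.1):
    """Reward improves as the assembly becomes both complete and correct."""
    return -(incompleteness(current, target, tol) + incorrectness(current, target, tol))
```

Both measures need only the target point cloud, which is what lets the method drop per-step connection labels.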